In industrial and service domains, the key benefit of using robots is their ability to perform repetitive tasks quickly and reliably. However, even relatively simple peg-in-hole tasks are typically subject to stochastic variations, requiring search motions to find relevant features such as holes. While searching improves robustness, it comes at the cost of increased runtime: a more exhaustive search maximizes the probability of successfully executing the given task, but considerably delays any downstream tasks. This trade-off is typically resolved by human experts according to simple heuristics, which are rarely optimal. This paper introduces an automatic, data-driven, and heuristic-free approach to optimizing robot search strategies. By training a neural model of the search strategy on a wide range of simulated stochastic environments, conditioning it on a few real-world examples, and inverting the model, we can infer search strategies that adapt to the time-variant characteristics of the underlying probability distributions while requiring very few real-world measurements. We evaluate the approach on two different industrial robots in the context of spiral and probe search for electronic components.
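The abstract leaves the model and objective unspecified; the following is a minimal sketch of the "train a neural model, then invert it" idea, assuming a PyTorch surrogate that maps strategy parameters and an environment encoding to a predicted success score and duration. All names, shapes, and the loss weighting are illustrative assumptions, not the paper's implementation.

```python
import torch

class StrategyModel(torch.nn.Module):
    """Maps search-strategy parameters and an environment encoding to a
    predicted outcome: [success logit, expected search duration]."""
    def __init__(self, n_params=4, n_ctx=8):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_params + n_ctx, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 2))

    def forward(self, params, ctx):
        return self.net(torch.cat([params, ctx], dim=-1))

model = StrategyModel()           # assume: trained on simulated environments
ctx = torch.randn(1, 8)           # assume: encoded from a few real measurements
params = torch.zeros(1, 4, requires_grad=True)  # e.g. spiral pitch, speed, extent
opt = torch.optim.Adam([params], lr=1e-2)

for _ in range(200):              # model inversion: optimize the model's inputs
    opt.zero_grad()
    success_logit, duration = model(params, ctx).unbind(dim=-1)
    loss = -torch.sigmoid(success_logit) + 0.1 * duration  # success vs. runtime
    loss.sum().backward()
    opt.step()
```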
Semantic segmentation of surgical scenes is a prerequisite for task automation in robot-assisted interventions. We propose LapSeg3D, a novel DNN-based approach for the voxel-wise annotation of point clouds representing surgical scenes. Since the manual annotation of training data is highly time-consuming, we introduce a semi-autonomous clustering-based pipeline for the annotation of the gallbladder, which is used to generate segmentation labels for the DNN. When evaluated against manually annotated data, LapSeg3D achieves an F1 score of 0.94 for gallbladder segmentation on various datasets of ex-vivo porcine livers. We show LapSeg3D generalizes accurately across different gallbladders and datasets recorded with different RGB-D camera systems.
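The clustering pipeline is not detailed in the abstract; one plausible shape for such a semi-automatic labeling step, sketched here with scikit-learn's DBSCAN, is to cluster points in a joint position/color space and keep the cluster containing a user-provided seed point. The function, feature weighting, and parameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def label_gallbladder(points_xyz, points_rgb, seed_idx, eps=0.05):
    """Cluster points in combined position/color space and keep the cluster
    containing a user-selected seed point as the gallbladder label."""
    feats = np.hstack([points_xyz, 0.05 * points_rgb])  # weight color vs. geometry
    clusters = DBSCAN(eps=eps, min_samples=10).fit_predict(feats)
    return clusters == clusters[seed_idx]   # boolean per-point segmentation mask

xyz = np.random.rand(1000, 3)               # stand-in point cloud
rgb = np.random.rand(1000, 3)
mask = label_gallbladder(xyz, rgb, seed_idx=0)
```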
Challenging manipulation tasks can be solved effectively by combining individual robot skills, which must be parameterized for the concrete physical environment and task at hand. This is time-consuming for human programmers, particularly for force-controlled skills. To this end, we present Shadow Program Inversion (SPI), a novel approach to infer optimal skill parameters directly from data. SPI leverages unsupervised learning to train an auxiliary differentiable program representation ("shadow program") and realizes parameter inference via gradient-based model inversion. Our method enables the use of efficient first-order optimizers to infer optimal parameters for originally non-differentiable skills, including many skill variants currently used in production. SPI zero-shot generalizes across task objectives, meaning that shadow programs do not need to be retrained to infer parameters for different task variants. We evaluate our method on three different robots and skill frameworks in industrial and household scenarios. Code and examples are available at https://innolab.artiminds.com/icra2021.
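As a rough illustration under assumed names and shapes: a pretrained differentiable "shadow" surrogate predicts a skill's outcome trajectory from its parameters, so any differentiable task objective can be minimized with a first-order optimizer, and swapping the objective requires no retraining. This sketch is not the paper's architecture, only the inversion pattern it describes.

```python
import torch

shadow = torch.nn.Sequential(    # assume: pretrained, params -> flat trajectory
    torch.nn.Linear(6, 128), torch.nn.ReLU(), torch.nn.Linear(128, 50 * 3))

def infer_params(task_loss, steps=300):
    """Zero-shot over objectives: swap `task_loss` without retraining the shadow."""
    p = torch.zeros(1, 6, requires_grad=True)   # e.g. approach pose, force threshold
    opt = torch.optim.Adam([p], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        traj = shadow(p).view(-1, 3)            # predicted end-effector positions
        task_loss(traj).backward()              # gradients flow through the surrogate
        opt.step()
    return p.detach()

# Example objective: end the motion at a given goal point.
goal = torch.tensor([0.4, 0.0, 0.1])
params = infer_params(lambda traj: (traj[-1] - goal).pow(2).sum())
```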
View-dependent effects such as reflections pose a substantial challenge for image-based and neural rendering algorithms. Above all, curved reflectors are particularly hard, as they lead to highly non-linear reflection flows as the camera moves. We introduce a new point-based representation to compute Neural Point Catacaustics allowing novel-view synthesis of scenes with curved reflectors, from a set of casually-captured input photos. At the core of our method is a neural warp field that models catacaustic trajectories of reflections, so complex specular effects can be rendered using efficient point splatting in conjunction with a neural renderer. One of our key contributions is the explicit representation of reflections with a reflection point cloud which is displaced by the neural warp field, and a primary point cloud which is optimized to represent the rest of the scene. After a short manual annotation step, our approach allows interactive high-quality renderings of novel views with accurate reflection flow. Additionally, the explicit representation of reflection flow supports several forms of scene manipulation in captured scenes, such as reflection editing, cloning of specular objects, reflection tracking across views, and comfortable stereo viewing. We provide the source code and other supplemental material on https://repo-sam.inria.fr/fungraph/neural_catacaustics/
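A schematic of the two-cloud design under illustrative assumptions (splatting and the neural renderer are omitted, and all module names are invented for this sketch): a primary point cloud represents the scene, while a reflection point cloud is displaced per view by a neural warp field conditioned on the camera.

```python
import torch

class WarpField(torch.nn.Module):
    """Predicts a per-point displacement of reflection points for a given camera."""
    def __init__(self, feat_dim=16):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3 + feat_dim + 3, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 3))

    def forward(self, refl_pts, feats, cam_dir):
        cam = cam_dir.expand(refl_pts.shape[0], 3)
        return refl_pts + self.mlp(torch.cat([refl_pts, feats, cam], dim=-1))

primary_pts = torch.randn(10000, 3)             # optimized to represent the scene
refl_pts = torch.randn(2000, 3, requires_grad=True)
refl_feats = torch.randn(2000, 16)              # learned per-point features
warp = WarpField()
cam_dir = torch.tensor([0.0, 0.0, 1.0])
warped = warp(refl_pts, refl_feats, cam_dir)    # one sample of a catacaustic path
points_to_splat = torch.cat([primary_pts, warped], dim=0)  # fed to the renderer
```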
Brain-inspired computing proposes a set of algorithmic principles that hold promise for advancing artificial intelligence. They endow systems with self-learning capabilities, efficient energy usage, and high storage capacity. A core concept that lies at the heart of brain computation is sequence learning and prediction. This form of computation is essential for almost all our daily tasks such as movement generation, perception, and language. Understanding how the brain performs such a computation is not only important to advance neuroscience but also to pave the way to new technological brain-inspired applications. A previously developed spiking neural network implementation of sequence prediction and recall learns complex, high-order sequences in an unsupervised manner by local, biologically inspired plasticity rules. An emerging type of hardware that holds promise for efficiently running this type of algorithm is neuromorphic hardware. It emulates the way the brain processes information and maps neurons and synapses directly into a physical substrate. Memristive devices have been identified as potential synaptic elements in neuromorphic hardware. In particular, redox-induced resistive random access memory (ReRAM) devices stand out in many aspects. They permit scalability, are energy efficient and fast, and can implement biological plasticity rules. In this work, we study the feasibility of using ReRAM devices as a replacement for the biological synapses in the sequence learning model. We implement and simulate the model, including the ReRAM plasticity, using the neural simulator NEST. We investigate the effect of different device properties on the performance characteristics of the sequence learning model, and demonstrate resilience with respect to different on-off ratios, conductance resolutions, device variability, and synaptic failure.
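The model itself is implemented in the NEST simulator; the standalone toy update below only mirrors the device properties the abstract names, with assumed constants: a conductance range bounded by the on-off ratio, a finite number of conductance levels (resolution), multiplicative write variability, and a synaptic failure probability.

```python
import numpy as np

rng = np.random.default_rng(0)
G_OFF, G_ON = 1.0, 20.0        # on-off ratio of 20 (assumed value)
LEVELS = 64                    # conductance resolution (assumed value)
VARIABILITY = 0.05             # multiplicative cycle-to-cycle write noise
P_FAIL = 0.01                  # probability an update silently fails

def reram_update(g, dg):
    """Apply a plasticity-driven conductance change under device constraints."""
    if rng.random() < P_FAIL:                 # synaptic failure: no write happens
        return g
    dg *= rng.normal(1.0, VARIABILITY)        # device variability perturbs the write
    g = np.clip(g + dg, G_OFF, G_ON)          # bounded by off/on conductances
    step = (G_ON - G_OFF) / (LEVELS - 1)
    return G_OFF + round((g - G_OFF) / step) * step  # quantize to finite levels

g = 5.0
g = reram_update(g, +0.7)      # e.g. a potentiation step from the plasticity rule
```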
This paper describes several improvements to a new method for signal decomposition that we recently formulated under the name of Differentiable Dictionary Search (DDS). The fundamental idea of DDS is to exploit a class of powerful deep invertible density estimators called normalizing flows, to model the dictionary in a linear decomposition method such as NMF, effectively creating a bijection between the space of dictionary elements and the associated probability space, allowing a differentiable search through the dictionary space, guided by the estimated densities. As the initial formulation was a proof of concept with some practical limitations, we present several steps towards making it scalable, aiming to improve both the computational complexity of the method and its signal decomposition capabilities. As a testbed for experimental evaluation, we choose the task of frame-level piano transcription, where the signal is to be decomposed into sources whose activity is attributed to individual piano notes. To highlight the impact of improved non-linear modelling of sources, we compare variants of our method to a linear overcomplete NMF baseline. Experimental results show that even in the absence of additional constraints, our models produce increasingly sparse and precise decompositions, according to two pertinent evaluation measures.
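The core of DDS can be sketched as joint optimization of latent dictionary atoms and non-negative activations, with the flow's density acting as a regularizer. The toy affine "flow" below stands in for a real normalizing flow purely to keep the sketch self-contained and runnable; every name and the loss weighting are assumptions.

```python
import math
import torch

class ToyFlow(torch.nn.Module):
    """Stand-in for a trained normalizing flow: an invertible affine map with a
    Gaussian base density. A real DDS dictionary would use a deep flow."""
    def __init__(self, dim):
        super().__init__()
        self.scale = torch.nn.Parameter(torch.ones(dim))
        self.shift = torch.nn.Parameter(torch.zeros(dim))

    def inverse(self, z):                  # latent -> dictionary atom
        return z * self.scale + self.shift

    def log_prob(self, x):                 # change of variables for the affine map
        z = (x - self.shift) / self.scale
        base = -0.5 * (z ** 2).sum(-1) - 0.5 * z.shape[-1] * math.log(2 * math.pi)
        return base - torch.log(self.scale.abs()).sum()

def dds_decompose(x, flow, n_sources, dim, steps=500, lam=0.1):
    z = torch.randn(n_sources, dim, requires_grad=True)   # latent atoms to search
    a = torch.rand(n_sources, requires_grad=True)         # activations (pre-ReLU)
    opt = torch.optim.Adam([z, a], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        atoms = flow.inverse(z)                 # decode latents into atoms
        recon = torch.relu(a) @ atoms           # non-negative linear mixture
        loss = (recon - x).pow(2).sum() - lam * flow.log_prob(atoms).sum()
        loss.backward()                         # density guides the search
        opt.step()
    return torch.relu(a).detach(), atoms.detach()

x = torch.randn(64)                             # e.g. one magnitude-spectrum frame
acts, atoms = dds_decompose(x, ToyFlow(64), n_sources=8, dim=64)
```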
We introduce a novel way to incorporate prior information into (semi-) supervised non-negative matrix factorization, which we call differentiable dictionary search. It enables general, highly flexible and principled modelling of mixtures where non-linear sources are linearly mixed. We study its behavior on an audio decomposition task, and conduct an extensive, highly controlled study of its modelling capabilities.
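For contrast with the non-linear dictionary above, here is the linear building block being generalized: supervised NMF with a fixed non-negative dictionary W, solved for non-negative activations H by the standard Euclidean multiplicative update of Lee & Seung. This is a generic baseline sketch, not the paper's exact configuration.

```python
import numpy as np

def nmf_activations(V, W, n_iter=200, eps=1e-9):
    """Solve V ~= W @ H for H >= 0 with the dictionary W held fixed.
    Both V and W must be non-negative for the multiplicative update."""
    H = np.random.default_rng(0).random((W.shape[1], V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # Euclidean multiplicative update
    return H

V = np.abs(np.random.randn(64, 100))   # spectrogram-like non-negative data
W = np.abs(np.random.randn(64, 8))     # fixed dictionary, e.g. note templates
H = nmf_activations(V, W)
```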
In contrast to exploratory analysis of high-dimensional datasets with techniques such as principal component analysis (PCA), neighbor embedding (NE) techniques tend to better preserve the local structure/topology of high-dimensional data. However, the ability to preserve local structure comes at the expense of interpretability: techniques such as t-distributed stochastic neighbor embedding (t-SNE) or uniform manifold approximation and projection (UMAP) do not give insights into which high-dimensional features underlie the topological (cluster) structure seen in the corresponding embedding. Here, we propose different "tricks" from the field of chemometrics, based on PCA, Q-residuals, and Hotelling's T2 contributions, in combination with novel visualization approaches, to derive local and global explanations of neighbor embeddings. We show how our methods allow the identification of discriminative features between groups of data points using standard univariate or multivariate approaches.
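The underlying quantities are standard in chemometrics. A minimal sketch with scikit-learn, using one common formulation of the per-feature contributions (the paper's exact definitions and visualizations may differ):

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.randn(500, 30)              # stand-in for the high-dimensional data
pca = PCA(n_components=5).fit(X)
T = pca.transform(X)                       # scores
Xc = X - pca.mean_
E = Xc - T @ pca.components_               # residuals outside the PCA subspace
q_contrib = E ** 2                         # per-feature Q-residual contributions
Q = q_contrib.sum(axis=1)                  # Q statistic per sample

lam = pca.explained_variance_              # per-component score variances
T2 = ((T ** 2) / lam).sum(axis=1)          # Hotelling's T2 per sample
t2_contrib = (T / lam) @ pca.components_ * Xc   # per-sample, per-feature T2 contributions
```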
Changes in tumor volume and tumor characteristics over time are important biomarkers for cancer therapy. In this context, FDG-PET/CT scans are routinely used for the staging and restaging of cancer, since radiolabeled fluorodeoxyglucose is taken up in regions of high metabolism. Unfortunately, these regions of high metabolism are not specific to tumors and can also represent physiological uptake by normally functioning organs, inflammation, or infection, making detailed and reliable tumor segmentation in these scans a demanding task. The AutoPET challenge addresses this research gap by providing a public dataset of FDG-PET/CT scans from 900 patients to encourage further improvement in this field. Our contribution to this challenge is an ensemble of two state-of-the-art segmentation models, nnU-Net and Swin UNETR, augmented by a maximum intensity projection classifier that acts like a gating mechanism. If it predicts the presence of lesions, both segmentations are combined via a late fusion approach. Our solution achieves a Dice score of 72.12% in our cross-validation on patients diagnosed with lung cancer, melanoma, and lymphoma. Code: https://github.com/heiligerl/autopet_submission
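Schematically, the described gating and late fusion might look as follows. The three model functions are placeholder stubs for the trained networks in the linked repository, the PET channel/axis layout is assumed, and fusion by probability averaging is one plausible reading of "late fusion", not a confirmed detail.

```python
import numpy as np

# Placeholder stubs; in the real submission these are the trained models.
def mip_classifier(mip):            # returns P(lesion present) from the MIP image
    return float(mip.mean() > 1.0)
def nnunet_probs(x):                # voxelwise foreground probabilities
    return np.random.rand(*x.shape[1:])
def swin_unetr_probs(x):
    return np.random.rand(*x.shape[1:])

def predict(pet_ct):
    """pet_ct: array of shape (channels, X, Y, Z) with PET as channel 0 (assumed)."""
    mip = pet_ct[0].max(axis=-1)                     # maximum intensity projection
    if mip_classifier(mip) < 0.5:                    # gate: no lesion -> empty mask
        return np.zeros(pet_ct.shape[1:], dtype=np.uint8)
    probs = 0.5 * (nnunet_probs(pet_ct) + swin_unetr_probs(pet_ct))  # late fusion
    return (probs > 0.5).astype(np.uint8)

mask = predict(np.random.rand(2, 64, 64, 32))
```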
Background: Deep learning-based automated segmentation of head and neck lymph node levels (HN_LNL) is of high relevance for radiotherapy research and clinical treatment planning, yet remains understudied in the academic literature. Methods: An expert-delineated cohort of 35 planning CTs was used to train an nnU-Net 3D-fullres/2D-ensemble model for the automated segmentation of 20 different HN_LNL. Validation was performed on an independent test set (n = 20). In a fully blinded evaluation, three clinical experts rated the quality of the deep learning autosegmentations in a head-to-head comparison with expert-created contours. For a subgroup of 10 cases, intraobserver variability was compared with deep learning autosegmentation performance. The effect of autocontour consistency with the CT slice plane orientation on geometric accuracy and expert ratings was investigated. Results: The mean blinded expert rating of deep learning segmentations adjusted to the CT slice plane was significantly better than the rating of expert-created contours (81.0 vs. 79.6, p < 0.001), while deep learning segmentations without slice plane adjustment were rated significantly worse than expert-created contours (77.2 vs. 79.6, p < 0.001). The geometric accuracy of deep learning segmentations was indistinguishable from intraobserver variability (mean Dice, 0.78 vs. 0.77, p = 0.064), with accuracy differing between levels (p < 0.001). The clinical significance of consistency with the CT slice plane orientation was not captured by geometric accuracy metrics (Dice, 0.78 vs. 0.78, p = 0.572). Conclusions: We show that an nnU-Net 3D-fullres/2D ensemble can be used for the highly accurate automated segmentation of HN_LNL using only a limited training dataset, making it ideally suited for the large-scale standardized autosegmentation of HN_LNL in research settings. Geometric accuracy metrics are only an imperfect surrogate for blinded expert ratings.
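For reference, the geometric accuracy metric quoted above is the Dice coefficient; a minimal implementation for binary masks, as commonly used to compare autocontours with expert contours:

```python
import numpy as np

def dice(a, b, eps=1e-9):
    """Dice similarity coefficient between two binary masks of equal shape."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)
```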